The naive approach
Tasked with representing some big data, I want to see whether Blender can handle it. Here is some exploring first.
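The original snippet didn't survive this copy, so here is a minimal sketch of the kind of per-pixel write being described. The 'pixeltest' name and 40*30 size are taken from later in the post; the pixel coordinates and channel values are assumptions.

    import bpy

    # make a small image to experiment on; 'pixeltest' and 40*30
    # come from later snippets in this post
    img = bpy.data.images.new('pixeltest', width=40, height=30)

    # pixels is a flat sequence of floats, four per pixel (RGBA);
    # pick one pixel and work out where its values start (coordinates are arbitrary)
    x, y, width = 5, 5, 40
    offset = (y * width + x) * 4

    # the two lines below do the actual pixel adjustment; each item
    # assignment crosses the Python/RNA boundary, which is what costs time
    img.pixels[offset] = 1.0      # red channel
    img.pixels[offset + 3] = 1.0  # alpha channel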
Adjusting the pixel on the last two lines above takes the most time. This image shows the result, zoomed in.
baby steps
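The snippet that originally sat under this heading isn't preserved. What the next paragraph describes sounds like a loop over every pixel of a small image, something along these lines; the image name and the 12*10 size are assumptions, the size chosen only to match the pixel count mentioned below.

    import bpy

    # size is an assumption, picked to match the '120 pixels' below;
    # change it to 400, 300 to see how badly this approach scales
    width, height = 12, 10
    img = bpy.data.images.new('babysteps', width=width, height=height)

    # still the slow per-item path, just repeated for every pixel
    for i in range(width * height):
        idx = i * 4
        img.pixels[idx] = 1.0      # red
        img.pixels[idx + 1] = 1.0  # green
        img.pixels[idx + 2] = 1.0  # blue
        img.pixels[idx + 3] = 1.0  # alpha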
This is relatively fast, but it's only 120 pixels in total. Try changing to 400*300 and you can expect it to take a lot longer, far too long to scale to big data, and the result is something profoundly uninteresting anyway. If that isn't a good method, then perhaps construct the data and overwrite image_object in one go. You'll probably want to make sure the dimensions make sense; a quick check is sketched below.
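A sanity check along those lines, assuming the image is the 'pixeltest' created earlier:

    import bpy

    img = bpy.data.images['pixeltest']

    # the flat pixel buffer must hold exactly width * height * 4 floats (RGBA)
    expected = img.size[0] * img.size[1] * 4
    assert len(img.pixels) == expected, (len(img.pixels), expected)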
What we know - end of naive
With a 40*30 image I don't expect to notice much time difference, but I'll know whether the operation is possible:

    dm = [1.0 for i in range(4800)]  # 40 * 30 * 4 floats, one per RGBA channel
    bpy.data.images['pixeltest'].pixels = dm  # turns them all white

So maybe try constructing the array first, then assigning. This leads to a much faster way of pushing pixels: first create the image, then the array, then modify the array, then overwrite the image with the array data. The snippet below overwrites with a dark gray.
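That snippet didn't make it into this copy either; a sketch of the create-then-overwrite idea, with 0.2 standing in for the dark gray value, could be:

    import bpy

    # create the image first (40*30 as before)
    width, height = 40, 30
    img = bpy.data.images.new('pixeltest', width=width, height=height)

    # then build the whole array in Python: four floats per pixel, RGBA
    gray = 0.2  # the exact gray value is an assumption
    dm = [gray, gray, gray, 1.0] * (width * height)

    # then overwrite every pixel with a single assignment
    img.pixels = dm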
And it seems that the speed is now closer to acceptable; here is a version that does a 400*300 px overwrite. 4000*3000 will still take 10 seconds or so (on a 2.4 GHz dual-core machine), but that's not too bad.
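A version along those lines, with a timer added so the timing can be checked on your own machine; the image name and the gradient fill are just illustrative choices.

    import bpy
    import time

    width, height = 400, 300
    img = bpy.data.images.new('pixeltest_large', width=width, height=height)

    start = time.time()

    # build a simple horizontal gradient so the overwrite produces
    # something visible; any width * height * 4 floats would do
    dm = []
    for y in range(height):
        for x in range(width):
            v = x / (width - 1)
            dm.extend((v, v, v, 1.0))

    # one assignment pushes the whole buffer into the image
    img.pixels = dm

    print('overwrite took', round(time.time() - start, 3), 'seconds')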